Search results for "explainable ai"

Showing 6 of 6 documents

The Role of Explainable AI in the Research Field of AI Ethics

2023

Ethics of Artificial Intelligence (AI) is a growing research field that has emerged in response to the challenges related to AI. Transparency poses a key challenge for implementing AI ethics in practice. One solution to transparency issues is AI systems that can explain their decisions. Explainable AI (XAI) refers to AI systems that are interpretable or understandable to humans. The research fields of AI ethics and XAI lack a common framework and conceptualization, and there is no clarity about the field’s depth and versatility. A systematic approach to understanding the corpus is needed, and a systematic review offers an opportunity to detect research gaps and focus points. This paper presents the results…

AI ethics; systematic mapping study; research ethics; technological development; ethics; artificial intelligence; explainable AI

Can Interpretable Reinforcement Learning Manage Prosperity Your Way?

2022

Personalisation of products and services is fast becoming the driver of success in banking and commerce. Machine learning holds the promise of gaining a deeper understanding of and tailoring to customers’ needs and preferences. Whereas traditional solutions to financial decision problems frequently rely on model assumptions, reinforcement learning is able to exploit large amounts of data to improve customer modelling and decision-making in complex financial environments with fewer assumptions. Model explainability and interpretability present challenges from a regulatory perspective which demands transparency for acceptance; they also offer the opportunity for improved insight into and unde…

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; General Earth and Planetary Sciences; AI in banking; personalized services; prosperity management; explainable AI; reinforcement learning; policy regularisation; VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550; General Environmental Science; Machine Learning (cs.LG); AI; Volume 3; Issue 2; Pages: 526-537

Reinforcement Learning Your Way: Agent Characterization through Policy Regularization

2022

The increased complexity of state-of-the-art reinforcement learning (RL) algorithms has resulted in an opacity that inhibits explainability and understanding. This has led to the development of several post hoc explainability methods that aim to extract information from learned policies, thus aiding explainability. These methods rely on empirical observations of the policy, and thus aim to generalize a characterization of agents’ behaviour. In this study, we have instead developed a method to imbue agents’ policies with a characteristic behaviour through regularization of their objective functions. Our method guides the agents’ behaviour during learning, which results in a…
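The policy-regularization idea described above can be sketched in miniature: add a penalty to the learning objective that pulls the policy toward a chosen "characteristic" behaviour. The sketch below is a generic illustration under invented names (softmax action preferences, a KL penalty toward a prior, a coefficient `beta`), not the paper's actual formulation:

```python
import math

def softmax(prefs):
    """Convert action preferences into a probability distribution."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def regularized_objective(prefs, rewards, prior, beta):
    """Expected reward minus beta * KL(policy || prior).

    Maximizing this trades reward against staying close to a
    characteristic 'prior' behaviour -- the essence of regularizing
    an agent's objective function. All names here are illustrative.
    """
    pi = softmax(prefs)
    expected_reward = sum(p * r for p, r in zip(pi, rewards))
    kl = sum(p * math.log(p / q) for p, q in zip(pi, prior) if p > 0)
    return expected_reward - beta * kl

# With beta = 0 the agent purely chases reward; a larger beta pulls
# the learned policy toward the prior, imprinting the behaviour.
```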

FOS: Computer and information sciences; Computer Science - Machine Learning; Artificial Intelligence (cs.AI); Computer Science - Artificial Intelligence; explainable AI; multi-agent systems; deterministic policy gradients; General Earth and Planetary Sciences; VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550; General Environmental Science; Machine Learning (cs.LG)

ML-Based Radiomics Analysis for Breast Cancer Classification in DCE-MRI

2022

Breast cancer is the most common malignancy threatening women’s health. Although Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in clinical practice for characterizing breast lesions, physician grading performance is still not optimal, showing a specificity of about 72%. In this work, radiomics was used to analyze a dataset acquired with two different protocols in order to train machine-learning algorithms for breast cancer classification. The original radiomic features were expanded with Laplacian of Gaussian filtering and Wavelet Transform images to evaluate whether they can improve predictive performance. A Multi-Instant features selection invo…
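The feature-expansion step mentioned in the abstract, applying a Laplacian of Gaussian (LoG) filter to an image before computing radiomic features, can be sketched with textbook 3×3 kernels. Everything here (the kernels, the toy mean-intensity "feature") is illustrative, not the study's actual pipeline:

```python
def convolve2d(img, kernel):
    """Valid-mode 2-D convolution on nested lists (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += img[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A Gaussian blur followed by a discrete Laplacian approximates the
# Laplacian-of-Gaussian filtering used to expand the feature images;
# both kernels below are standard textbook examples.
GAUSS = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
LAPLACE = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def log_filter(img):
    return convolve2d(convolve2d(img, GAUSS), LAPLACE)

def mean_intensity(img):
    """A toy first-order 'radiomic feature' computed on an image."""
    vals = [v for row in img for v in row]
    return sum(vals) / len(vals)
```

In a radiomics pipeline, features such as `mean_intensity` would be computed both on the original image and on its LoG-filtered (and wavelet-decomposed) versions, widening the candidate feature set before selection.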

Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni; Radiomics; Image processing; Explainable AI; Machine learning

NeuronAlg: An Innovative Neuronal Computational Model for Immunofluorescence Image Segmentation

2023

Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps and therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. Method: A fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing indirect immunofluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It is very different from the conventional neural network approaches but has an equivalent quantitative and qu…
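NeuronAlg itself is not reproduced here, but a deterministic, non-ML segmentation baseline of the kind the abstract contrasts with neural-network approaches can be sketched as thresholding followed by connected-component labelling; the threshold and 4-connectivity choices are assumptions for illustration only:

```python
from collections import deque

def segment(image, threshold):
    """Deterministic segmentation: threshold the image, then label
    4-connected foreground components with a breadth-first flood fill.
    A generic non-ML baseline, not the NeuronAlg model itself.
    Returns (number of components, label map).
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and labels[y][x] == 0:
                current += 1                      # start a new region
                queue = deque([(y, x)])
                labels[y][x] = current
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels
```

Each labelled component would correspond to a candidate cell or nucleus in an IIF image; the whole procedure is fully deterministic and requires no training data.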

Settore INF/01 - Informatica; neuron physiology networks; biomedical imaging; explainable AI; computer-aided analysis; image segmentation; pattern analysis; Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics and Optics; Analytical Chemistry; Sensors

Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient and Explainable Solution

2022

An explainable artificial intelligence (XAI) agent is an autonomous agent that uses a fundamental XAI model at its core to perceive its environment and suggest actions to be performed. One of the significant challenges for these XAI agents is performing their operation efficiently, which is governed by the underlying inference and optimization system. Along these lines, the Explainable Fuzzy AI Challenge (XFC 2022) competition was launched, whose principal objective was to develop a fully autonomous and optimized XAI algorithm that could play the Python arcade game “Asteroid Smasher”. This research first investigates inference models to implement an efficient XAI agent using rule-based …
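A minimal Mamdani-style fuzzy inference loop, the kind of rule-based model the abstract refers to, can be sketched as follows; the rule base, membership functions, and universes are invented for illustration and are not the winning agent's actual controller:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(distance):
    """One-input, one-output Mamdani controller sketch.

    Rules (invented for illustration):
      IF distance is NEAR THEN thrust is HIGH
      IF distance is FAR  THEN thrust is LOW
    Implication: min; aggregation: max; defuzzification: centroid.
    """
    w_near = tri(distance, -50.0, 0.0, 50.0)    # NEAR firing strength
    w_far = tri(distance, 0.0, 100.0, 200.0)    # FAR firing strength
    # Discretize the thrust universe [0, 10], aggregate the clipped
    # output sets, and take the centroid of the aggregated shape.
    num = den = 0.0
    steps = 101
    for i in range(steps):
        t = 10.0 * i / (steps - 1)
        mu = max(min(w_near, tri(t, 5.0, 10.0, 15.0)),   # HIGH, clipped
                 min(w_far, tri(t, -5.0, 0.0, 5.0)))     # LOW, clipped
        num += mu * t
        den += mu
    return num / den if den else 0.0
```

Because every step (membership degrees, rule firing strengths, clipped output sets) is inspectable, the agent's chosen thrust can be traced back to human-readable rules, which is what makes such controllers explainable.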

inference; optimization; fuzzy systems; TSK; algorithmics; intelligent agents; fuzzy logic; artificial intelligence; AI agents; Mamdani inference system; explainable AI